12 research outputs found

    Local Dominance

    Full text link
    We define a local notion of dominance that speaks to the true choice problems a player faces among actions in a game tree. When we do not restrict players' ability to do contingent reasoning, a reduced strategy is weakly dominant if and only if it prescribes a locally dominant action at every decision node; therefore, any dynamic decomposition of a direct mechanism that preserves strategy-proofness is robust to the lack of global planning. Under a form of wishful thinking, we also show that strategy-proofness is robust to the lack of forward planning. Moreover, we identify simple forms of contingent reasoning and foresight, driven by the local viewpoint. We construct a dynamic game that implements the Top Trading Cycles allocation in locally dominant actions under these simple forms of reasoning.
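    The abstract's target allocation can be illustrated by the classic static Top Trading Cycles algorithm (Shapley–Scarf); the paper's contribution is a *dynamic* game implementing it, which is not shown here. A minimal sketch of the standard static algorithm, with hypothetical variable names:

```python
def top_trading_cycles(preferences, endowment):
    """Static Top Trading Cycles: each active agent points to the owner
    of her favourite remaining object; every trading cycle is executed
    and removed, and the process repeats until all agents are assigned."""
    # preferences[i]: agent i's ranking of objects, most preferred first
    # endowment[i]: the object agent i initially owns
    owner = {obj: agent for agent, obj in endowment.items()}
    active = set(endowment)              # agents still in the market
    objects = set(endowment.values())    # objects still available
    assignment = {}
    while active:
        # Each active agent points to the owner of her top remaining object.
        target = {i: owner[next(o for o in preferences[i] if o in objects)]
                  for i in active}
        # Follow pointers from an arbitrary agent until a cycle repeats.
        i = next(iter(active))
        seen = []
        while i not in seen:
            seen.append(i)
            i = target[i]
        cycle = seen[seen.index(i):]     # the agents on the trading cycle
        # Everyone on the cycle receives the object she pointed at, then leaves.
        for j in cycle:
            assignment[j] = next(o for o in preferences[j] if o in objects)
        for j in cycle:
            objects.discard(assignment[j])
            active.discard(j)
    return assignment
```

    For example, with endowments `{0: 'a', 1: 'b', 2: 'c'}` and agents 0 and 1 each preferring the other's object, agents 0 and 1 trade in the first round and agent 2 keeps her own object.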

    Cautious Belief and Iterated Admissibility

    Full text link
    We define notions of cautiousness and cautious belief to provide epistemic conditions for iterated admissibility in finite games. We show that iterated admissibility characterizes the behavioral implications of "cautious rationality and common cautious belief in cautious rationality" in a terminal lexicographic type structure. For arbitrary type structures, the behavioral implications of these epistemic assumptions are characterized by the solution concept of self-admissible set (Brandenburger, Friedenberg, and Keisler, 2008). We also show that analogous conclusions hold under alternative epistemic assumptions, in particular if cautiousness is "transparent" to the players. KEYWORDS: Epistemic game theory, iterated admissibility, weak dominance, lexicographic probability systems. JEL: C72.
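    The iterated admissibility procedure the abstract refers to can be sketched for a finite two-player game in pure strategies. Note this is a simplification: full iterated admissibility allows domination by mixed strategies; here deletion is simultaneous (maximal) in each round and uses pure dominators only, and all names are hypothetical:

```python
def dominated_rows(u1, rows, cols):
    """Player 1 strategies (rows) weakly dominated, among the remaining
    rows, by some other remaining pure row against remaining columns."""
    out = set()
    for r in rows:
        for r2 in rows - {r}:
            if all(u1[r2][c] >= u1[r][c] for c in cols) and \
               any(u1[r2][c] > u1[r][c] for c in cols):
                out.add(r)
                break
    return out

def dominated_cols(u2, rows, cols):
    """Symmetric check for player 2 strategies (columns)."""
    out = set()
    for c in cols:
        for c2 in cols - {c}:
            if all(u2[r][c2] >= u2[r][c] for r in rows) and \
               any(u2[r][c2] > u2[r][c] for r in rows):
                out.add(c)
                break
    return out

def iterated_admissibility(u1, u2):
    """Simultaneously delete weakly dominated pure strategies for both
    players until no further deletion occurs; returns surviving indices."""
    rows = set(range(len(u1)))
    cols = set(range(len(u1[0])))
    while True:
        d1 = dominated_rows(u1, rows, cols)
        d2 = dominated_cols(u2, rows, cols)
        if not d1 and not d2:
            return sorted(rows), sorted(cols)
        rows -= d1
        cols -= d2
```

    In a 2x2 game where row B is weakly dominated by T, deleting B can make a column of player 2 dominated in the next round, illustrating the iterative nature of the concept.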

    Backward induction reasoning beyond backward induction

    Get PDF
    Backward Induction is a fundamental concept in game theory. As an algorithm, it can only be used to analyze a very narrow class of games, but its logic is also invoked, albeit informally, in several solution concepts for games with imperfect or incomplete information (Subgame Perfect Equilibrium, Sequential Equilibrium, etc.). Yet, the very meaning of 'backward induction reasoning' is not clear in these settings, and we lack a way to apply this simple and compelling idea to more general games. We remedy this by introducing a solution concept for games with imperfect and incomplete information, Backwards Rationalizability, that captures precisely the implications of backward induction reasoning. We show that Backwards Rationalizability satisfies several properties that are normally ascribed to backward induction reasoning, such as: (i) an incomplete-information extension of subgame consistency (continuation-game consistency); (ii) the possibility, in finite-horizon games, of being computed via a tractable backwards procedure; (iii) the view of unexpected moves as mistakes; (iv) a characterization of the robust predictions of a 'perfect equilibrium' notion that introduces the backward induction logic and nothing more into equilibrium analysis. We also discuss a few applications, including a new version of peer-confirming equilibrium (Lipnowski and Sadler, 2019) that, thanks to the backward induction logic distilled by Backwards Rationalizability, restores in dynamic games the natural comparative statics the original concept only displays in static settings.
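    The "narrow class of games" where backward induction applies as an algorithm is finite games with perfect information; a minimal sketch of that classic procedure on a game tree, with a hypothetical tree encoding (the paper's backwards procedure for Backwards Rationalizability is more general and is not reproduced here):

```python
def backward_induction(node):
    """Classic backward induction on a finite perfect-information tree.
    A terminal history is a tuple of payoffs, one entry per player; a
    decision node is a dict {"player": p, "moves": {action: subtree}}.
    Returns the induced payoff profile and the path of chosen actions."""
    if isinstance(node, tuple):
        return node, []
    player, moves = node["player"], node["moves"]
    # Solve every subtree first; the mover then picks the action that
    # maximizes her own continuation payoff (ties broken arbitrarily).
    solved = {a: backward_induction(sub) for a, sub in moves.items()}
    best = max(solved, key=lambda a: solved[a][0][player])
    payoffs, path = solved[best]
    return payoffs, [best] + path
```

    In a simple entry game, the entrant anticipates that the incumbent will accommodate rather than fight after entry, and therefore enters; this folding-back logic is what the solution concept above extends beyond perfect information.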

    A case for transparency in principal-agent relationships

    Full text link
    When is transparency optimal for the principal in principal-agent relationships? We consider the following setting. The principal has private information that affects the agent's incentives to exert effort. Higher effort leads to higher material utility for both parties, but the agent bears the cost of effort. The principal can share her information with the agent and can commit to any information structure. We obtain interpretable and easily verifiable sufficient conditions for the optimality of full disclosure. With this, we show that full disclosure is optimal under some modeling assumptions commonly used in applied principal-agent papers.

    Local dominance

    No full text

    Belief change, rationality, and strategic reasoning in sequential games

    No full text
    A central aspect of strategic reasoning in sequential games consists in anticipating how co-players would react to information about past play, which in turn depends on how co-players update and revise their beliefs. Several notions of belief system have been used to model how players' beliefs change as they obtain new information, some imposing considerably more discipline than others on how beliefs at different information sets are related. We highlight the differences between these notions of belief system in terms of introspection about one's own conditional beliefs, but we also show that such differences do not affect the essential aspects of rational planning and the behavioral implications of strategic reasoning, as captured by rationalizability.